Selection of Predictors by Standard Error Rule
Authors
Abstract
Similar articles
Standard error as standard?
To the Editor: I am concerned about errors in simple statistical concepts in articles recently published in Circulation. First, I worry about the widespread use of the standard error of the mean (SEM) to describe the variability of numerical data. Specifically, SEM was used in 30% of the clinical investigations and reports and in 76% of the basic science reports analyzed for this purpose (the first...
A Standard Error: Distinguishing Standard Deviation From Standard Error
A recent Perspective in Nature issued a call for more transparency in the reporting of preclinical research (1). Although this article focused primarily on experimental design, it emphasized the need for improved reporting in the scientific literature. Within the context of preclinical studies, there have been discussions regarding the appropriate reporting of standard error (SE) and standard d...
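For a quick sense of the distinction the two letters above discuss, here is a minimal numerical sketch (the sample data and size are invented for illustration): the standard deviation (SD) summarizes how individual observations scatter, while the standard error of the mean is SD divided by the square root of the sample size and shrinks as more data are collected.

```python
import numpy as np

# Hypothetical sample of n = 25 measurements (invented for illustration).
rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=25)

# SD describes the spread of the individual observations.
sd = sample.std(ddof=1)

# SEM describes the precision of the sample mean: SEM = SD / sqrt(n).
sem = sd / np.sqrt(sample.size)

print(f"SD  = {sd:.3f}  (variability of the data)")
print(f"SEM = {sem:.3f}  (uncertainty of the mean)")
```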
Model selection via standard error adjusted adaptive lasso
The adaptive lasso is a model selection method shown to be both consistent in variable selection and asymptotically normal in coefficient estimation. The actual variable selection performance of the adaptive lasso depends on the weight used. It turns out that the weight assignment using the OLS estimate (OLS-adaptive lasso) can result in very poor performance when collinearity of the model matr...
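As background for the abstract above, here is a minimal sketch of the standard OLS-weighted adaptive lasso, not the standard-error-adjusted variant that paper proposes. It uses the usual column-rescaling trick so an ordinary lasso solver applies the weighted penalty; the toy data and function name are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    """OLS-weighted adaptive lasso via column rescaling (illustrative sketch)."""
    # Pilot OLS estimate; its reliability degrades under strong collinearity,
    # which is the weakness that SE-adjusted weighting targets.
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    weights = 1.0 / (np.abs(beta_ols) ** gamma + 1e-12)

    # Rescaling each column by 1/w_j makes a plain lasso penalty act as
    # the weighted penalty  lambda * sum_j w_j |beta_j|.
    X_scaled = X / weights
    fit = Lasso(alpha=alpha, fit_intercept=False).fit(X_scaled, y)

    # Map coefficients back to the original scale.
    return fit.coef_ / weights

# Toy data (invented): 5 informative predictors out of 20.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
beta_true = np.r_[np.ones(5), np.zeros(15)]
y = X @ beta_true + rng.normal(scale=0.5, size=200)
print(np.round(adaptive_lasso(X, y), 2))
```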
Feature Selection by Nonparametric Bayes Error Minimization
This paper presents an algorithmic framework for feature selection, which selects a subset of features by minimizing the nonparametric Bayes error. A set of existing algorithms as well as new ones can be derived naturally from this framework. For example, we show that the Relief algorithm greedily attempts to minimize the Bayes error estimated by k-Nearest-Neighbor method. This new interpretati...
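To illustrate the general idea rather than that paper's exact algorithm, the sketch below does greedy forward selection, adding at each step the feature that most reduces cross-validated k-NN error, used here as a rough proxy for the Bayes error. The data and function names are invented for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def knn_error(X, y, features, k=5):
    """Cross-validated k-NN error on a candidate feature subset."""
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=k),
                          X[:, features], y, cv=5).mean()
    return 1.0 - acc

def greedy_select(X, y, n_features):
    """Forward selection that greedily minimizes the estimated error."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_features:
        best = min(remaining, key=lambda j: knn_error(X, y, selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data (invented): 4 informative features out of 12.
X, y = make_classification(n_samples=300, n_features=12, n_informative=4,
                           n_redundant=0, random_state=0)
print(greedy_select(X, y, n_features=4))
```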
Automatic Parameter Selection by Minimizing Estimated Error
We address the problem of finding the parameter settings that will result in optimal performance of a given learning algorithm using a particular dataset as training data. We describe a "wrapper" method, considering determination of the best parameters as a discrete function optimization problem. The method uses best-first search and cross-validation to wrap around the basic induction algorithm: the ...
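As a minimal wrapper-style sketch of this idea, the example below scores each candidate parameter setting by cross-validation and keeps the best one; it uses exhaustive grid search rather than the best-first search described in the abstract, and the estimator and parameter grid are assumptions chosen for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Wrap the base learner: every candidate parameter setting is evaluated by
# cross-validation, and the setting with the best estimated score is kept.
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 3, 4, 5, None],
                "min_samples_leaf": [1, 2, 5, 10]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```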
Journal
Journal title: Japanese Journal of Applied Statistics
Year: 1999
ISSN: 0285-0370, 1883-8081
DOI: 10.5023/jappstat.28.21